
    Rough Set Based Approach for IMT Automatic Estimation

    Carotid artery (CA) intima-media thickness (IMT) is commonly regarded as one of the risk markers for cardiovascular diseases. The automatic estimation of the IMT on ultrasound images relies on the correct identification of the lumen-intima (LI) and media-adventitia (MA) interfaces. This task is complicated by noise, vessel morphology, and pathology of the carotid artery. In a previous study, we applied four non-linear methods for feature selection to a set of variables extracted from ultrasound carotid images. The main aim was to select the parameters carrying the highest amount of information useful to classify image pixels into the carotid regions they belong to. In this study we present a pixel classifier based on the selected features. Once the pixel classification was correctly performed, the IMT was evaluated and compared with two sets of manually traced profiles. The results showed that the automatic IMTs are not statistically different from the manual ones.
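The abstract does not specify which classifier was used, so as a hedged illustration only, the idea of classifying pixels into carotid regions from selected features can be sketched with a minimal nearest-centroid classifier on synthetic two-feature data (all feature values and region labels below are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical per-pixel features (e.g. intensity, local gradient) for three
# carotid regions: lumen, intima-media complex, adventitia
lumen = rng.normal([20, 1], 3, (100, 2))
intima_media = rng.normal([90, 8], 3, (100, 2))
adventitia = rng.normal([160, 2], 3, (100, 2))
X = np.vstack([lumen, intima_media, adventitia])
y = np.repeat([0, 1, 2], 100)

# Nearest-centroid pixel classifier: one centroid per region
centroids = np.array([X[y == k].mean(axis=0) for k in range(3)])

def classify(pixels):
    # Assign each pixel to the region with the closest centroid
    d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

acc = (classify(X) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

With well-separated synthetic regions the classifier is near-perfect; on real ultrasound data the overlap between regions is what makes feature selection necessary.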

    Using Landscape Pattern Metrics to Characterize Ecoregions

    Ecological regions, or ecoregions, are areas that exhibit “relative homogeneity in ecosystems”. The principal objective of this research was to determine if and how landscape structure (quantified by landscape pattern metrics) may be related to ecoregions defined using Omernik’s approach to ecoregionalization. Nine key landscape pattern metrics (number of LULC classes and the proportion of each class, number of patches, mean patch size and area-weighted fractal dimension, perimeter-area fractal dimension, contagion, mean Euclidean nearest-neighbor distance, and interspersion and juxtaposition index) were used to assess landscape structure in a sample of 26 Omernik Level III ecoregions located in the central United States. The results indicated that the behavior of most of the metrics (such as number of patches, mean patch size, mean Euclidean nearest-neighbor distance, and contagion) could only be explained when they were considered in context with the other metrics. There were significant correlations among several of the metrics used, reasserting the redundancy of information provided by some of these indices. Adviser: James Merchan
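The redundancy among pattern metrics mentioned above can be illustrated with a small sketch: on synthetic data for 26 regions (matching the study's sample size, but with invented values), a metric constructed to covary with patch count shows a strong rank correlation with it, while an independent metric does not. The metric relationships here are assumptions for the example, not the study's results:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n = 26  # one row per ecoregion, as in the study's sample

# Hypothetical metric values: for a fixed landscape area, mean patch size
# is roughly the inverse of the number of patches, so the two are redundant
num_patches = rng.integers(50, 500, n).astype(float)
mean_patch_size = 1e4 / num_patches + rng.normal(0, 2, n)
contagion = rng.uniform(0, 100, n)  # an unrelated metric for contrast

rho, p = spearmanr(num_patches, mean_patch_size)
rho2, p2 = spearmanr(num_patches, contagion)
print(f"patches vs mean patch size: rho = {rho:.2f}, p = {p:.4f}")
print(f"patches vs contagion:       rho = {rho2:.2f}, p = {p2:.4f}")
```

A strongly negative rho for the first pair is exactly the kind of inter-metric correlation that makes some indices informationally redundant.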

    Key Aspects to Teach Medical Device Software Certification

    Certification of Medical Device Software (MDS) according to the EU Medical Device Regulation 2017/745 requires demonstrating safety and effectiveness. Thus, the syllabus of a course on MDS development must provide tools for addressing these issues. To assure safety, risk analysis has to be performed using a four-step procedure. Effectiveness can be demonstrated by a systematic literature review combined with meta-analysis, to compare the performance of the MDS with that of similar tools.

    A Wearable Multi-Sensor Array Enables the Recording of Heart Sounds in Homecare

    The home monitoring of patients affected by chronic heart failure (CHF) is of key importance in preventing acute episodes. Nevertheless, no wearable technological solution exists to date. A possibility could be offered by Cardiac Time Intervals extracted from simultaneous recordings of electrocardiographic (ECG) and phonocardiographic (PCG) signals. However, the recording of a good-quality PCG signal requires accurate positioning of the stethoscope over the chest, which is unfeasible for a naïve user such as the patient. In this work, we propose a solution based on multi-source PCG. We designed a flexible multi-sensor array to enable the recording of heart sounds by inexperienced users. The multi-sensor array is based on a flexible Printed Circuit Board mounting 48 microphones with a high spatial resolution, three electrodes to record an ECG, and a Magneto-Inertial Measurement Unit. We validated the usability on a sample population of 42 inexperienced volunteers and found that all subjects could record signals of good to excellent quality. Moreover, we found that the multi-sensor array is suitable for use on a wide population of at-risk patients regardless of their body characteristics. Based on the promising findings of this study, we believe that the described device could soon enable the home monitoring of CHF patients.

    Comparison of different similarity measures in hierarchical clustering

    The management of datasets containing heterogeneous types of data is a crucial point in the context of precision medicine, where genetic, environmental, and lifestyle information of each individual has to be analyzed simultaneously. Clustering represents a powerful method, used in data mining, for extracting new useful knowledge from unlabeled datasets. Clustering methods are essentially distance-based, since they measure the similarity (or the distance) between two elements or between one element and the cluster centroid. However, the selection of the distance metric is not a trivial task: it can influence the clustering results and, thus, the extracted information. In this study we analyze the impact of four similarity measures (Manhattan or L1 distance, Euclidean or L2 distance, Chebyshev or L∞ distance, and Gower distance) on the clustering results obtained for datasets containing different types of variables. We applied hierarchical clustering combined with an automatic cut point selection method to six datasets publicly available on the UCI Repository. Four different clusterings were obtained for every dataset (one for each distance) and were analyzed in terms of number of clusters, number of elements in each cluster, and cluster centroids. Our results showed that changing the distance metric produces substantial modifications in the obtained clusters. This behavior is particularly evident for datasets containing heterogeneous variables. Thus, the choice of the distance measure should not be made a priori but evaluated according to the set of data to be analyzed and the task to be accomplished.
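The comparison described above can be sketched with SciPy for the three Minkowski-type distances (Gower distance handles mixed variable types and is not provided by SciPy, so it is omitted here). The data below are synthetic and the linkage method is an assumption, since the abstract does not state which one was used:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Two well-separated synthetic groups of 20 samples with 3 numeric features
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 3)), rng.normal(5, 1, (20, 3))])

cluster_sizes = {}
for metric in ("cityblock", "euclidean", "chebyshev"):   # L1, L2, L-infinity
    d = pdist(X, metric=metric)              # condensed pairwise distance matrix
    Z = linkage(d, method="average")         # agglomerative hierarchical clustering
    labels = fcluster(Z, t=2, criterion="maxclust")  # cut the dendrogram at 2 clusters
    cluster_sizes[metric] = sorted(np.bincount(labels)[1:].tolist())
print(cluster_sizes)
```

On clean, well-separated numeric data all three metrics recover the same partition; the divergence reported in the study emerges on real, heterogeneous datasets, which is precisely its point.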

    Gait Impairment Score: A Fuzzy Logic-Based Index for Gait Assessment

    The objective assessment of a subject’s gait impairment is a complicated task. For this reason, several indices have been proposed in the literature for this purpose, taking into account different gait parameters. All of them are essentially based on the identification of “normality ranges” for the gait parameters of interest or of a “normal population”. However, it is not trivial to obtain a unique definition of “normal gait”. In this study we propose the Gait Impairment Score (GIS), a novel fuzzy logic-based index to evaluate a subject’s level of gait impairment. This index was obtained by combining two Fuzzy Inference Systems (FISs), based on gait phase (GP) and knee joint kinematics (JK) parameters, respectively. Eight GP parameters and ten JK parameters were extracted from the basographic and knee kinematic signals, respectively. These signals were acquired, for each of the subject’s lower limbs, using a set of wearable sensors connected to a commercial system for gait analysis. Each parameter was used as an input variable of the corresponding FIS. The output variable of each FIS represented the impairment level from the GP or JK point of view. The GP-FIS and JK-FIS were applied separately to the right- and left-leg parameters. Then, the fuzzy outputs of the two FISs were aggregated, independently for each side, to obtain the leg fuzzy output. The final GIS was obtained by aggregating the fuzzy outputs of the two legs. The score was validated against two gait analysis experts on a population of 12 subjects, both with and without walking pathologies. Analytic Hierarchy Process (AHP) pairwise comparisons were used to obtain the subjects’ rankings from the two experts. The same population was scored using the GIS and ordered in ascending order. Comparing the three rankings (from our system and from the two human experts), it emerged that our system gives the same judgment as a human expert.
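The pipeline of inferring a fuzzy output per subsystem, aggregating, and defuzzifying can be illustrated with a deliberately tiny Mamdani-style sketch in plain NumPy. The membership functions, rule base, and the two input values are all invented for the example; the actual GP-FIS and JK-FIS use 8 and 10 inputs and their own rule bases:

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function evaluated at points x."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

u = np.linspace(0.0, 1.0, 101)          # universe of the impairment score
low_set = trimf(u, -0.5, 0.0, 0.5)      # "low impairment" output set
high_set = trimf(u, 0.5, 1.0, 1.5)      # "high impairment" output set

def fis_output(param):
    """One-input Mamdani FIS: low parameter -> low impairment, high -> high."""
    mu_low = trimf(np.array([param]), -0.5, 0.0, 0.5)[0]
    mu_high = trimf(np.array([param]), 0.5, 1.0, 1.5)[0]
    # Rule implication by min, rule aggregation by max
    return np.maximum(np.minimum(mu_low, low_set), np.minimum(mu_high, high_set))

# Aggregate the fuzzy outputs of two subsystems (here standing in for the
# GP and JK FISs) by element-wise max, then defuzzify with the centroid method
gp_out, jk_out = fis_output(0.2), fis_output(0.8)
agg = np.maximum(gp_out, jk_out)
score = float(np.sum(u * agg) / np.sum(agg))
print(round(score, 3))
```

Aggregating the fuzzy sets before defuzzifying (rather than averaging two crisp scores) preserves the shape of each subsystem's output, which is the usual motivation for this design.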

    Could normalization improve robustness of abdominal MRI radiomic features?

    Radiomics-based systems could improve the management of oncological patients by supporting cancer diagnosis, treatment planning, and response assessment. However, one of the main limitations of these systems is the generalizability and reproducibility of results when they are applied to images acquired in different hospitals by different scanners. Normalization has been introduced to mitigate this issue, and two main approaches have been proposed: one rescales the image intensities (image normalization), the other the feature distributions for each center (feature normalization). The aim of this study is to evaluate how different image and feature normalization methods impact the robustness of 93 radiomic features computed on a multi-center, multi-scanner abdominal Magnetic Resonance Imaging (MRI) dataset. To this end, 88 rectal MRIs were retrospectively collected from 3 different institutions (4 scanners), and for each patient six 3D regions of interest on the obturator muscle were considered. The methods applied were min-max, 1st–99th percentile, and 3-sigma normalization, z-score standardization, mean centering, histogram normalization, and Nyul-Udupa and ComBat harmonization. The Mann-Whitney U-test was applied to assess feature repeatability between scanners, by comparing the feature values obtained for each normalization method, including the case in which no normalization was applied. Most image normalization methods reduced the overall variability in terms of intensity distributions, while worsening feature robustness or producing unpredictable results; the exception was the z-score, which provided a slight improvement by increasing the number of statistically similar features from 9/93 to 10/93. Conversely, feature normalization methods effectively reduced the overall variability across the scanners, in particular 3-sigma, z-score, and ComBat, which increased the number of similar features (79/93). According to our results, none of the image normalization methods was able to strongly increase the number of statistically similar features.
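The per-scanner feature-normalization step and the Mann-Whitney repeatability check can be sketched as follows. The feature values are synthetic stand-ins with an invented scanner-dependent shift and scale; only z-score and min-max are shown (ComBat requires a dedicated harmonization library):

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Synthetic values of one radiomic feature measured by two scanners that
# introduce different (hypothetical) shifts and scales
rng = np.random.default_rng(1)
scanner_a = rng.normal(100, 10, 50)
scanner_b = rng.normal(140, 25, 50)

def z_score(x):
    return (x - x.mean()) / x.std()

def min_max(x):
    return (x - x.min()) / (x.max() - x.min())

# Mann-Whitney U-test per method: a high p-value means the feature
# distributions are statistically similar across scanners
p_values = {}
for name, f in [("raw", lambda x: x), ("z-score", z_score), ("min-max", min_max)]:
    _, p = mannwhitneyu(f(scanner_a), f(scanner_b))
    p_values[name] = p
print({k: round(v, 4) for k, v in p_values.items()})
```

Normalizing each scanner's feature distribution separately removes the scanner-dependent location and scale, which is why feature normalization recovers many more "similar" features than image normalization in the study.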

    Normalization strategies in multi-center radiomics abdominal MRI: systematic review and meta-analyses

    Goal: Artificial intelligence applied to medical image analysis has been extensively used to develop non-invasive diagnostic and prognostic signatures. However, these imaging biomarkers should be extensively validated on multi-center datasets to prove their robustness before they can be introduced into clinical practice. The main challenge is represented by the great and unavoidable image variability, which is usually addressed using different pre-processing techniques, including spatial, intensity, and feature normalization. The purpose of this study is to systematically summarize normalization methods and to evaluate their correlation with radiomics model performance through meta-analyses. This review was carried out according to the PRISMA statement: 4777 papers were collected, but only 74 were included. Two meta-analyses were carried out according to two clinical aims: characterization and prediction of response. The findings of this review demonstrate that there are some commonly used normalization approaches, but no commonly agreed pipeline that can improve performance and bridge the gap between bench and bedside.

    Surface Electromyography Applied to Gait Analysis: How to Improve Its Impact in Clinics?

    Surface electromyography (sEMG) is the main non-invasive tool used to record the electrical activity of muscles during dynamic tasks. In clinical gait analysis, a number of techniques have been developed to obtain and interpret the muscle activation patterns of patients showing altered locomotion. However, the body of knowledge described in these studies is very seldom translated into routine clinical practice. The aim of this work is to critically analyze the key factors limiting the extensive use of these powerful techniques among clinicians. A thorough understanding of these limiting factors will provide an important opportunity to overcome them through specific actions, and to advance toward an evidence-based approach to rehabilitation grounded in objective findings and measurements.

    Automatic Identification of the Best Auscultation Area for the Estimation of the Time of Closure of Heart Valves through Multi-Source Phonocardiography

    In recent years, multi-source phonocardiography (PCG) has been gaining interest in relation to the home monitoring of cardiovascular diseases. One application of interest is the monitoring of the time of closure of the four cardiac valves, which would enable the follow-up of patients at risk of heart failure. In this work, we propose a hybrid system based on hierarchical clustering and Multi-Criteria Decision Analysis (MCDA) for automatically selecting the best auscultation area for this application through multi-source PCG. We simultaneously recorded 48 PCG signals from the subject's chest and divided them into morphologically homogeneous groups using agglomerative hierarchical clustering, based on their correlation. Then, we explored three different approaches to select the best auscultation area, based respectively on the minimum latency, on the maximum signal-to-noise ratio, and on multiple criteria using ELECTRE III. The results obtained on the follow-up of a healthy subject over consecutive days show that a) the selection of the auscultation area using MCDA overcomes the limits of single-criterion approaches, and b) the estimate of the time of closure of the heart valves using the proposed system is more robust than that obtained through the state-of-the-art single-source methodology.
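The correlation-based grouping of channels described above can be sketched on synthetic signals: a correlation distance is computed between every pair of channels and fed to agglomerative clustering. The waveforms, channel count (6 instead of 48), and linkage choice are assumptions for the example:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
# Hypothetical 6-channel recording: two groups of morphologically similar
# signals, each corrupted by independent noise
base1 = np.sin(2 * np.pi * 5 * t)
base2 = np.sign(np.sin(2 * np.pi * 3 * t))
signals = np.array([b + 0.1 * rng.standard_normal(t.size)
                    for b in [base1, base1, base1, base2, base2, base2]])

# Correlation distance between every pair of channels: 1 - |Pearson r|
dist = 1.0 - np.abs(np.corrcoef(signals))
# Convert the square matrix to the condensed form expected by linkage();
# checks=False tolerates tiny floating-point asymmetries
condensed = squareform(dist, checks=False)
Z = linkage(condensed, method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)
```

Channels sharing a waveform end up in the same group, after which a selection rule (latency, SNR, or an MCDA method such as ELECTRE III) can pick the best area within or across groups.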